In this assignment, you will apply your knowledge of CNNs to estimate the growth stage of weeds from the number of leaves on a plant. The more leaves, the further the weed has grown.
The purpose of this assignment is to gain experience building and training neural networks. You will gain:
You must use Keras with the TensorFlow backend, i.e., the package tensorflow.keras. For this assignment, you may use other TensorFlow packages and scikit-learn, scikit-image, or pandas, but no other deep learning frameworks, e.g., PyTorch, MXNet, etc.
Submit your Jupyter notebook .ipynb file using Brightspace. Do not include any other files or images, as they will not be reviewed.
Make certain that you run all the cells in the notebook you submit, or you will lose marks.
The data for this assignment are plant images at different resolutions captured with a variety of cameras. The images show plants with approximately 1, 2, 3, 4, and 6 leaves. They are part of a leaf counting dataset by Teimouri et al. [1], which can be downloaded from Aarhus University, Denmark:
Leaf counting dataset (Required files are posted on Brightspace)
However, you must work with the subset of images posted on Brightspace as training.zip and testing.zip. There are 200 images for each of the 5 classes. As Figure 1 shows, there is great variety in the plants and imaging conditions. The dataset is split into a training set and a testing set: 180 images per class for training and validation, and 20 images per class for testing.
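The 180-images-per-class training pool is later divided again into training and validation subsets. One way to do this while preserving the class balance is a stratified split; a minimal sketch with placeholder data standing in for the real images:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the real images: 900 training samples,
# 5 classes with 180 samples each (labels 0..4).
X = np.arange(900).reshape(900, 1)
y = np.repeat(np.arange(5), 180)

# stratify=y keeps the per-class proportions identical in both subsets
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

print(X_tr.shape, X_val.shape)                 # (675, 1) (225, 1)
print(np.bincount(y_tr), np.bincount(y_val))   # 135 per class / 45 per class
```

Without `stratify`, a random split can leave some classes under-represented in the validation set, which distorts the validation accuracy.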
In this section:
!pip install keras
!pip install tensorflow
!pip install -U scikit-learn
!pip install seaborn
Requirement already satisfied: keras (2.6.0), tensorflow (2.6.4), scikit-learn (1.0.2), seaborn (0.11.2)
Successfully installed h5py-3.1.0 numpy-1.19.5 tensorboard-2.6.0 typing-extensions-3.10.0.2
[full pip output and dependency-conflict warnings elided]
# Prepare your dataset here
# importing the required libraries
import os
import cv2
import time
import keras
import numpy as np
import pandas as pd
import seaborn as sns
from keras.models import Model
import matplotlib.pyplot as plt
from keras.applications.vgg16 import VGG16
from sklearn.metrics import confusion_matrix
from keras.layers import Dropout, InputLayer
from mpl_toolkits.axes_grid1 import ImageGrid
from keras.layers import ELU, PReLU, LeakyReLU
from keras.layers import Input, Dense, Flatten
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import BatchNormalization, Conv2D, MaxPooling2D
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
def draw_func(list_of_imgs):
    # display up to five images side by side in a grid
    fig = plt.figure(figsize=(30, 30))
    grid = ImageGrid(fig, 111, nrows_ncols=(1, 5), axes_pad=0.1)
    for ax, im in zip(grid, list_of_imgs):
        ax.imshow(im)
    plt.show()
def Plot(X, Y, Label, Color, Marker, S, Xlabel, Ylabel, Title):
    plt.plot(X, Y, label=Label, c=Color)
    plt.scatter(X, Y, c=Color, marker=Marker, s=S)
    plt.xlabel(Xlabel)
    plt.ylabel(Ylabel)
    plt.title(Title)
    plt.legend()
    # plt.show()
    return plt
def ConfusionMatrix(Y_Actual, Y_Pred):
    CF = confusion_matrix(Y_Actual, Y_Pred)
    return CF
# to plot a confusion matrix
def PLOT_ConfusionMatrix(CF, Title):
    sns.heatmap(CF, annot=True, fmt='d')
    plt.title(Title, fontsize=15)
    plt.xlabel('Predicted', fontsize=15)
    plt.ylabel('Actual', fontsize=15)
    return plt.show()
############################################################################### Q1
path_train = r"/kaggle/input/plants-classification/training/training"
path_test = r"/kaggle/input/plants-classification/testing/testing"
print(os.listdir(path_train))
start = time.time()
folders_names = os.listdir(path_train)
images_names_train = []
images_names_test = []
x_train = []
x_test = []
for i in range(len(folders_names)):
    images_names_train.append(os.listdir(path_train + '//' + folders_names[i]))
    images_names_test.append(os.listdir(path_test + '//' + folders_names[i]))
for i in range(len(folders_names)):
    for j in range(len(images_names_train[i])):
        img = cv2.imread(path_train + '//' + folders_names[i] + '//' + images_names_train[i][j])
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        x_train.append(img)
for i in range(len(folders_names)):
    for j in range(len(images_names_test[i])):
        img = cv2.imread(path_test + '//' + folders_names[i] + '//' + images_names_test[i][j])
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        x_test.append(img)
y_train = []
y_test = []
for i in range(len(folders_names)):
    for j in range(len(images_names_train[i])):
        y_train.append(i)
for i in range(len(folders_names)):
    for j in range(len(images_names_test[i])):
        y_test.append(i)
x_train = np.array(x_train)
x_test = np.array(x_test)
y_train = np.array(y_train)
y_test = np.array(y_test)
stop = time.time()
print("Total time is :", stop - start)
############################################################################### Display 5 images from each class
start_time = time.time()
start = 0
end = 5
for i in range(5):
    print("########################################## Class Number : ", i, " ##########################################")
    # function to plot the images in a grid
    draw_func(x_train[start:end])
    start = start + 200
    end = end + 200
stop_time = time.time()
print("Total time is :", stop_time - start_time)
['2', '3', '1', '4', '6']
Total time is : 2.679757833480835
[Five sample training images displayed for each of the classes 0-4]
Total time is : 5.043900728225708
print(x_train.shape)
print(x_test.shape)
(900,)
(100,)
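Note that the shapes print as (900,) and (100,) rather than 4-D image tensors: because the source images have different resolutions, NumPy cannot stack them into one array and instead falls back to a 1-D array of objects. A small sketch of the effect:

```python
import numpy as np

# Two "images" of different sizes cannot be stacked into one 4-D array,
# so NumPy stores them as a 1-D array of objects (hence shape (900,) above).
ragged = [np.zeros((50, 60, 3)), np.zeros((80, 40, 3))]
arr = np.array(ragged, dtype=object)
print(arr.shape)   # (2,)
print(arr.dtype)   # object
```

This is why the images must be resized to a common resolution before they can be fed to the network as a single batch array.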
For this assignment, you are asked to use the Keras implementation of VGG-16 as a starting point.
Using the first two blocks of VGG-16, add extra Keras layers to create your own version of a CNN for classifying the images according to the number of leaves in the plant images. Note that there will be 5 classes. The last layer taken from VGG-16 will be block2_pool, and you may add no more than five fully connected or convolutional layers to the network, including the final output layer.
Note: it is highly recommended to use even smaller input images to try things out. You are not expected to fine-tune the initial VGG layers.
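One way to take only the first two VGG-16 blocks is to cut the pretrained model at block2_pool with the functional API. A minimal sketch (weights=None here only to avoid the ImageNet download for illustration; use weights='imagenet' for the actual assignment):

```python
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.models import Model

# Load VGG-16 without its classifier head.
base = VGG16(include_top=False, weights=None, input_shape=(128, 128, 3))

# Keep only the first two convolutional blocks, ending at block2_pool.
truncated = Model(inputs=base.input,
                  outputs=base.get_layer('block2_pool').output)
truncated.trainable = False  # the VGG layers are not fine-tuned

print(truncated.output_shape)  # (None, 32, 32, 128)
```

Custom classification layers can then be stacked on top of `truncated`, for example inside a `Sequential` model.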
When your classifier is working:
# Write your code here
############################# Splitting the data
new_x_train, x_val, new_y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42)
print("the Shape of new x train set is : ", new_x_train.shape)
print("the Shape of the x validation set is : ", x_val.shape, "\n")
############################# images resizing (128,128,3)
print("############################ Before Resizing ############################")
plt.imshow(new_x_train[400])
plt.show()
resized_x_train_total = []
resized_x_train = []
resized_x_test = []
resized_x_val = []
for i in range(len(x_train)):
    resized_x_train_total.append(cv2.resize(x_train[i], (128, 128), interpolation=cv2.INTER_AREA))
for i in range(len(new_x_train)):
    resized_x_train.append(cv2.resize(new_x_train[i], (128, 128), interpolation=cv2.INTER_AREA))
for i in range(len(x_test)):
    resized_x_test.append(cv2.resize(x_test[i], (128, 128), interpolation=cv2.INTER_AREA))
for i in range(len(x_val)):
    resized_x_val.append(cv2.resize(x_val[i], (128, 128), interpolation=cv2.INTER_AREA))
resized_x_train_total = np.array(resized_x_train_total)
resized_x_train = np.array(resized_x_train)
resized_x_test = np.array(resized_x_test)
resized_x_val = np.array(resized_x_val)
print("############################ After Resizing ############################")
plt.imshow(resized_x_train[400])
########################## Data shuffling
from sklearn.utils import shuffle
resized_x_train, new_y_train = shuffle(resized_x_train, new_y_train, random_state=42)
resized_x_train_total, y_train = shuffle(resized_x_train_total, y_train, random_state=42)
resized_x_test, y_test = shuffle(resized_x_test, y_test, random_state=42)
resized_x_val, y_val = shuffle(resized_x_val, y_val, random_state=42)
the Shape of new x train set is :  (675,)
the Shape of the x validation set is :  (225,)
############################ Before Resizing ############################
[sample image shown]
############################ After Resizing ############################
[resized sample image shown]
import tensorflow
from tensorflow import keras
model_Vgg = keras.models.Sequential()
architecture = VGG16(include_top=False, input_shape=(128,128,3), weights='imagenet', classes=5, pooling="avg")
##################### Freeze all layers
for layer in architecture.layers:
    layer.trainable = False
# add the first two VGG-16 blocks (layers up to and including block2_pool)
for i in range(0, 7):
    model_Vgg.add(architecture.layers[i])
# Normalization
resized_x_train = resized_x_train/255
resized_x_val = resized_x_val/255
model_Vgg.add(Conv2D(128, (3, 3), activation="linear",padding='valid')) # layer 1
model_Vgg.add(Conv2D(16, (1, 1), activation=keras.layers.LeakyReLU(),padding='valid')) # 1x1 conv layer
model_Vgg.add(MaxPooling2D(pool_size=(2, 2),padding='valid')) # max pooling layer
model_Vgg.add(Flatten())
model_Vgg.add(Dense(1024, activation=keras.layers.LeakyReLU())) # layer 2
model_Vgg.add(Dense(512, activation=keras.layers.LeakyReLU())) # layer 3
model_Vgg.add(Dense(256, activation=keras.layers.LeakyReLU())) # layer 4
model_Vgg.add(Dense(5, activation='softmax')) # layer 5
model_Vgg.compile(loss='sparse_categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
print(model_Vgg.summary())
history_Vgg = model_Vgg.fit(resized_x_train, new_y_train, epochs=30, batch_size = 32, validation_data=(resized_x_val, y_val))
loss_training = history_Vgg.history['loss']
loss_test = history_Vgg.history['val_loss']
accuracy_training = history_Vgg.history['accuracy']
accuracy_test = history_Vgg.history['val_accuracy']
####################### Plotting
plt.plot(loss_test)
plt.plot(loss_training)
plt.xlabel("# Of Epochs")
plt.ylabel("Loss")
plt.legend(['val Loss', 'Train Loss'])
plt.show()
plt.plot(accuracy_test)
plt.plot(accuracy_training)
plt.xlabel("# Of Epochs")
plt.ylabel("Accuracy")
plt.legend(['val accuracy', 'Train accuracy'])
plt.show()
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
block1_conv1 (Conv2D)        (None, 128, 128, 64)      1792
block1_conv2 (Conv2D)        (None, 128, 128, 64)      36928
block1_pool (MaxPooling2D)   (None, 64, 64, 64)        0
block2_conv1 (Conv2D)        (None, 64, 64, 128)       73856
block2_conv2 (Conv2D)        (None, 64, 64, 128)       147584
block2_pool (MaxPooling2D)   (None, 32, 32, 128)       0
conv2d_6 (Conv2D)            (None, 30, 30, 128)       147584
conv2d_7 (Conv2D)            (None, 30, 30, 16)        2064
max_pooling2d_3 (MaxPooling) (None, 15, 15, 16)        0
flatten_3 (Flatten)          (None, 3600)              0
dense_12 (Dense)             (None, 1024)              3687424
dense_13 (Dense)             (None, 512)               524800
dense_14 (Dense)             (None, 256)               131328
dense_15 (Dense)             (None, 5)                 1285
=================================================================
Total params: 4,754,645
Trainable params: 4,494,485
Non-trainable params: 260,160
_________________________________________________________________
None
Epoch 1/30  - loss: 57.9119 - accuracy: 0.2089 - val_loss: 22.5523 - val_accuracy: 0.2578
Epoch 10/30 - loss: 0.2518 - accuracy: 0.9200 - val_loss: 1.9826 - val_accuracy: 0.3911
Epoch 20/30 - loss: 0.0011 - accuracy: 1.0000 - val_loss: 3.3919 - val_accuracy: 0.4267
Epoch 30/30 - loss: 3.8174e-04 - accuracy: 1.0000 - val_loss: 3.5995 - val_accuracy: 0.4400
[intermediate epoch logs elided; training accuracy reaches 1.0000 by epoch 17 while validation accuracy plateaus around 0.44]
print("################## Training Including Validation ##################")
y_pred_vgg_train = np.argmax(model_Vgg.predict(resized_x_train_total), axis=1)
cm1 = ConfusionMatrix(y_train, y_pred_vgg_train)
PLOT_ConfusionMatrix(cm1,"Training including validation set")
model_vgg_report = classification_report(y_train, y_pred_vgg_train)
print(model_vgg_report)
print("################## Test set ##################")
y_pred_vgg_test = np.argmax(model_Vgg.predict(resized_x_test), axis=1)
cm1 = ConfusionMatrix(y_test, y_pred_vgg_test)
PLOT_ConfusionMatrix(cm1,"Test set")
model_vgg_report = classification_report(y_test, y_pred_vgg_test)
print(model_vgg_report)
################## Training Including Validation ##################
precision recall f1-score support
0 0.83 0.77 0.80 180
1 0.66 0.76 0.71 180
2 1.00 0.29 0.45 180
3 0.58 0.86 0.70 180
4 0.73 0.84 0.78 180
accuracy 0.70 900
macro avg 0.76 0.70 0.69 900
weighted avg 0.76 0.70 0.69 900
################## Test set ##################
precision recall f1-score support
0 0.37 0.35 0.36 20
1 0.29 0.40 0.33 20
2 0.67 0.10 0.17 20
3 0.29 0.40 0.33 20
4 0.32 0.35 0.33 20
accuracy 0.32 100
macro avg 0.38 0.32 0.31 100
weighted avg 0.38 0.32 0.31 100
Repeat the steps of Part 1a, but reformulate the task as a regression problem, i.e., your network needs to output a single float value ranging from 0 to 6, corresponding to the number of leaves. Again, you are not expected to fine-tune the initial VGG layers.
The size of the training data is quite small. Discuss based on your learning curves if overfitting is occurring with the models from Parts 1a and 1b.
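To compare the regression model's accuracy against the classifier, its continuous output has to be mapped back to a discrete class index. A simpler alternative to the min-max rescaling used in the metric below is to clip the raw prediction to the valid range and round to the nearest integer; a sketch with hypothetical prediction values:

```python
import numpy as np

# Hypothetical regression outputs (floats) and true class indices 0..4.
y_pred = np.array([0.2, 1.6, 3.9, 2.4, 4.3])
y_true = np.array([0,   2,   4,   2,   4  ])

# Clip to the valid class range, then round to the nearest integer class.
y_hat = np.clip(np.round(y_pred), 0, 4).astype(int)
accuracy = np.mean(y_hat == y_true)
print(y_hat, accuracy)  # [0 2 4 2 4] 1.0
```

Unlike min-max rescaling, clip-and-round does not depend on the spread of predictions within a batch, so the same prediction always maps to the same class.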
#Write your code here
start_time = time.time()
import tensorflow
from tensorflow import keras
import warnings
warnings.filterwarnings("ignore")
# Eager execution is required so the custom metric below can call .numpy()
tensorflow.config.run_functions_eagerly(True)
from sklearn.metrics import accuracy_score

model_Vgg = keras.models.Sequential()
architecture = VGG16(include_top=False, input_shape=(128,128,3), weights='imagenet', classes=5, pooling="avg")

# Custom metric: rescale the raw regression outputs into the label range
# [0, 4], round to the nearest integer, and score them as class predictions.
def round_preds(y_true, y_pred):
    lo, hi = 0, 4  # label range (avoid shadowing the built-ins min/max)
    y_true = y_true.numpy()
    y_pred = y_pred.numpy()
    y_pred = (y_pred - np.min(y_pred)) / (np.max(y_pred) - np.min(y_pred)) * (hi - lo) + lo
    y_pred = np.round(y_pred)
    return accuracy_score(y_true, y_pred)

##################### Freeze all pretrained layers
for layer in architecture.layers:
    layer.trainable = False

# Reuse the first two VGG16 convolutional blocks
for i in range(0, 7):
    model_Vgg.add(architecture.layers[i])

# Normalization: scale pixel values to [0, 1]
resized_x_train = resized_x_train/255
resized_x_val = resized_x_val/255

model_Vgg.add(Conv2D(128, (3, 3), activation="linear", padding='valid')) # layer 1
model_Vgg.add(Conv2D(16, (1, 1), activation=keras.layers.LeakyReLU(), padding='valid')) # 1x1 conv layer
model_Vgg.add(MaxPooling2D(pool_size=(2, 2), padding='valid')) # max pooling layer
model_Vgg.add(Flatten())
model_Vgg.add(Dense(1024, activation=keras.layers.LeakyReLU())) # layer 2
model_Vgg.add(Dense(512, activation=keras.layers.LeakyReLU())) # layer 3
model_Vgg.add(Dense(256, activation=keras.layers.LeakyReLU())) # layer 4
model_Vgg.add(Dense(1, activation="linear")) # single-value regression output
model_Vgg.compile(loss="mse", optimizer="adam", metrics=[round_preds])
print(model_Vgg.summary())
history_Vgg = model_Vgg.fit(resized_x_train, new_y_train, epochs=200, batch_size=32, validation_data=(resized_x_val, y_val))
loss_training = history_Vgg.history['loss']
loss_test = history_Vgg.history['val_loss']
accuracy_training = history_Vgg.history["round_preds"]
accuracy_test = history_Vgg.history['val_round_preds']
####################### Plotting
plt.plot(loss_test)
plt.plot(loss_training)
plt.xlabel("# Of Epochs")
plt.ylabel("Loss")
plt.legend(['val Loss', 'Train Loss'])
plt.show()
plt.plot(accuracy_test)
plt.plot(accuracy_training)
plt.xlabel("# Of Epochs")
plt.ylabel("Accuracy")
plt.legend(['val accuracy', 'Train accuracy'])
plt.show()
stop_time = time.time()
print("Total time is :", stop_time - start_time)
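The `.numpy()` calls inside `round_preds` are the reason eager execution is forced on above, which slows training. A graph-compatible alternative is sketched below as an assumption, not part of the submitted code; it assumes integer labels in 0–4 and rounds the raw output rather than min–max rescaling the batch:

```python
import tensorflow as tf

def rounded_accuracy(y_true, y_pred):
    """Round the single regression output to the nearest class index,
    clip it into [0, 4], and compare with the integer labels."""
    y_pred = tf.clip_by_value(tf.round(tf.reshape(y_pred, [-1])), 0.0, 4.0)
    y_true = tf.cast(tf.reshape(y_true, [-1]), tf.float32)
    return tf.reduce_mean(tf.cast(tf.equal(y_true, y_pred), tf.float32))
```

Because this version uses only TensorFlow ops, `tensorflow.config.run_functions_eagerly(True)` would no longer be needed when it is passed to `metrics=[...]`.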
Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
block1_conv1 (Conv2D)        (None, 128, 128, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 128, 128, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 64, 64, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 64, 64, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 64, 64, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 32, 32, 128)       0
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 30, 30, 128)       147584
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 30, 30, 16)        2064
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 15, 15, 16)        0
_________________________________________________________________
flatten_6 (Flatten)          (None, 3600)              0
_________________________________________________________________
dense_24 (Dense)             (None, 1024)              3687424
_________________________________________________________________
dense_25 (Dense)             (None, 512)               524800
_________________________________________________________________
dense_26 (Dense)             (None, 256)               131328
_________________________________________________________________
dense_27 (Dense)             (None, 1)                 257
=================================================================
Total params: 4,753,617
Trainable params: 4,493,457
Non-trainable params: 260,160
_________________________________________________________________
None
Training log (200 epochs, condensed): training loss dropped from 9444.24 at epoch 1 to 0.15 at epoch 200 while round_preds rose from 0.22 to about 0.80; validation loss settled near 1.6–2.0 after roughly epoch 40, and val_round_preds never exceeded about 0.32 (0.2564 at epoch 200).
Total time is : 429.8443212509155
print("################## Training Including Validation ##################")
y_pred_vgg_train = model_Vgg.predict(resized_x_train_total)
# Rescale the raw regression outputs to the label range [0, 4] via min-max
# normalization, then round to the nearest class index. (Avoid shadowing the
# built-ins min/max with variable names.)
label_min, label_max = 0, 4
y_pred_vgg_train = (y_pred_vgg_train - np.min(y_pred_vgg_train)) / (np.max(y_pred_vgg_train) - np.min(y_pred_vgg_train)) * (label_max - label_min) + label_min
y_pred_vgg_train = np.round(y_pred_vgg_train)
cm1 = ConfusionMatrix(y_train, y_pred_vgg_train)
PLOT_ConfusionMatrix(cm1, "Training including validation set")
model_vgg_report = classification_report(y_train, y_pred_vgg_train)
print(model_vgg_report)

print("################## Test set ##################")
y_pred_vgg_test = model_Vgg.predict(resized_x_test)
# Same rescaling and rounding applied to the test predictions
y_pred_vgg_test = (y_pred_vgg_test - np.min(y_pred_vgg_test)) / (np.max(y_pred_vgg_test) - np.min(y_pred_vgg_test)) * (label_max - label_min) + label_min
y_pred_vgg_test = np.round(y_pred_vgg_test)
cm1 = ConfusionMatrix(y_test, y_pred_vgg_test)
PLOT_ConfusionMatrix(cm1, "Test set")
model_vgg_report = classification_report(y_test, y_pred_vgg_test)
print(model_vgg_report)
################## Training Including Validation ##################
              precision    recall  f1-score   support

           0       0.70      0.09      0.16       180
           1       0.23      0.71      0.34       180
           2       0.13      0.19      0.15       180
           3       0.35      0.09      0.14       180
           4       1.00      0.01      0.01       180

    accuracy                           0.22       900
   macro avg       0.48      0.22      0.16       900
weighted avg       0.48      0.22      0.16       900
################## Test set ##################
              precision    recall  f1-score   support

           0       0.21      0.15      0.18        20
           1       0.20      0.55      0.29        20
           2       0.17      0.20      0.19        20
           3       0.17      0.05      0.08        20
           4       0.00      0.00      0.00        20

    accuracy                           0.19       100
   macro avg       0.15      0.19      0.15       100
weighted avg       0.15      0.19      0.15       100
Write your discussion here.
Regularization and data augmentation are common strategies for dealing with small datasets.
Incorporate Batch Normalization and Dropout into the design of the superior network trained in Part 1. You are not expected to fine-tune the initial VGG layers. Again, you will provide the following:
Train the same model from Step 1, now using data augmentation. Again, please provide the same output metrics as in Step 1.
Based on your learning curves and final metrics from Step 2, discuss how large an improvement can be observed from regularization and data augmentation.
# Write your code here
######################################################### step (1)
import tensorflow
from tensorflow import keras
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten, Dense,
                                     BatchNormalization, Dropout)

model_Vgg = keras.models.Sequential()
architecture = VGG16(include_top=False, input_shape=(128, 128, 3), weights='imagenet', classes=5, pooling="avg")

##################### Freeze all pretrained VGG layers
for layer in architecture.layers:
    layer.trainable = False

# Reuse only the first two convolutional blocks of VGG16
for i in range(0, 7):
    model_Vgg.add(architecture.layers[i])

# Normalization: scale pixel values to [0, 1]
resized_x_train = resized_x_train / 255
resized_x_val = resized_x_val / 255

model_Vgg.add(Conv2D(128, (3, 3), activation="linear", padding='valid'))  # layer 1
model_Vgg.add(Conv2D(16, (1, 1), activation=keras.layers.LeakyReLU(), padding='valid'))  # 1x1 conv layer
model_Vgg.add(MaxPooling2D(pool_size=(2, 2), padding='valid'))  # max pooling layer
model_Vgg.add(Flatten())
model_Vgg.add(Dense(1024, activation=keras.layers.LeakyReLU()))  # layer 2
model_Vgg.add(BatchNormalization())
model_Vgg.add(Dropout(0.5))
model_Vgg.add(Dense(512, activation=keras.layers.LeakyReLU()))  # layer 3
model_Vgg.add(BatchNormalization())
model_Vgg.add(Dropout(0.3))
model_Vgg.add(Dense(256, activation=keras.layers.LeakyReLU()))  # layer 4
model_Vgg.add(BatchNormalization())
model_Vgg.add(Dropout(0.2))
model_Vgg.add(Dense(5, activation='softmax'))  # output layer
model_Vgg.compile(loss='sparse_categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
print(model_Vgg.summary())
history_Vgg = model_Vgg.fit(resized_x_train, new_y_train, epochs=100, batch_size=32, validation_data=(resized_x_val, y_val))
loss_training = history_Vgg.history['loss']
loss_test = history_Vgg.history['val_loss']
accuracy_training = history_Vgg.history['accuracy']
accuracy_test = history_Vgg.history['val_accuracy']
####################### Plotting
plt.plot(loss_test)
plt.plot(loss_training)
plt.xlabel("# Of Epochs")
plt.ylabel("Loss")
plt.legend(['val Loss', 'Train Loss'])
plt.show()
plt.plot(accuracy_test)
plt.plot(accuracy_training)
plt.xlabel("# Of Epochs")
plt.ylabel("Accuracy")
plt.legend(['val accuracy', 'Train accuracy'])
plt.show()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
block1_conv1 (Conv2D)        (None, 128, 128, 64)      1792
block1_conv2 (Conv2D)        (None, 128, 128, 64)      36928
block1_pool (MaxPooling2D)   (None, 64, 64, 64)        0
block2_conv1 (Conv2D)        (None, 64, 64, 128)       73856
block2_conv2 (Conv2D)        (None, 64, 64, 128)       147584
block2_pool (MaxPooling2D)   (None, 32, 32, 128)       0
conv2d_4 (Conv2D)            (None, 30, 30, 128)       147584
conv2d_5 (Conv2D)            (None, 30, 30, 16)        2064
max_pooling2d_2 (MaxPooling2 (None, 15, 15, 16)        0
flatten_2 (Flatten)          (None, 3600)              0
dense_8 (Dense)              (None, 1024)              3687424
batch_normalization_6 (Batch (None, 1024)              4096
dropout_6 (Dropout)          (None, 1024)              0
dense_9 (Dense)              (None, 512)               524800
batch_normalization_7 (Batch (None, 512)               2048
dropout_7 (Dropout)          (None, 512)               0
dense_10 (Dense)             (None, 256)               131328
batch_normalization_8 (Batch (None, 256)               1024
dropout_8 (Dropout)          (None, 256)               0
dense_11 (Dense)             (None, 5)                 1285
=================================================================
Total params: 4,761,813
Trainable params: 4,498,069
Non-trainable params: 263,744
_________________________________________________________________
None
Epoch 1/100 20/20 [==============================] - 2s 38ms/step - loss: 2.2284 - accuracy: 0.2841 - val_loss: 40.5830 - val_accuracy: 0.2296
... [epochs 2-99 abridged: training accuracy reaches ~0.99 by epoch 15 and saturates, while val_accuracy oscillates between ~0.20 and ~0.43 and val_loss remains in the 4-11 range] ...
Epoch 100/100 20/20 [==============================] - 0s 20ms/step - loss: 0.0243 - accuracy: 0.9921 - val_loss: 9.3389 - val_accuracy: 0.2519
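The log above shows training accuracy approaching 1.0 while validation loss keeps climbing, a classic overfitting signature. One hedged option (not part of the original submission) is Keras early stopping; the patience value below is an illustrative assumption, not a tuned setting:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for `patience` epochs and roll the
# model back to the best weights seen. patience=10 is an assumed value.
early_stop = EarlyStopping(monitor="val_loss", patience=10,
                           restore_best_weights=True)

# Usage with this notebook's model (names taken from the cells above):
# history_Vgg = model_Vgg.fit(resized_x_train, new_y_train, epochs=100,
#                             batch_size=32,
#                             validation_data=(resized_x_val, y_val),
#                             callbacks=[early_stop])
```

With `restore_best_weights=True`, the evaluation cells that follow would score the checkpoint with the lowest validation loss rather than the final, overfit epoch.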
########################################## step(1)
print("################## Training Including Validation ##################")
y_pred_vgg_train = np.argmax(model_Vgg.predict(resized_x_train_total), axis=1)
cm1 = ConfusionMatrix(y_train, y_pred_vgg_train)
PLOT_ConfusionMatrix(cm1,"Training including validation set")
model_vgg_report = classification_report(y_train, y_pred_vgg_train)
print(model_vgg_report)
print("################## Test set ##################")
y_pred_vgg_test = np.argmax(model_Vgg.predict(resized_x_test), axis=1)
cm1 = ConfusionMatrix(y_test, y_pred_vgg_test)
PLOT_ConfusionMatrix(cm1,"Test set")
model_vgg_report = classification_report(y_test, y_pred_vgg_test)
print(model_vgg_report)
################## Training Including Validation ##################
              precision    recall  f1-score   support

           0       0.90      0.48      0.63       180
           1       0.88      0.62      0.73       180
           2       1.00      0.04      0.09       180
           3       0.33      0.97      0.49       180
           4       0.81      0.62      0.70       180

    accuracy                           0.55       900
   macro avg       0.78      0.55      0.53       900
weighted avg       0.78      0.55      0.53       900
################## Test set ##################
              precision    recall  f1-score   support

           0       0.33      0.05      0.09        20
           1       0.33      0.05      0.09        20
           2       0.00      0.00      0.00        20
           3       0.20      0.85      0.32        20
           4       0.33      0.15      0.21        20

    accuracy                           0.22       100
   macro avg       0.24      0.22      0.14       100
weighted avg       0.24      0.22      0.14       100
/opt/conda/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1318: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result))
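The UndefinedMetricWarning above fires because class 2 receives no predicted samples on the test set. sklearn's `zero_division` parameter controls this behaviour explicitly; a minimal sketch with hypothetical toy labels (not the assignment data):

```python
from sklearn.metrics import classification_report

# Toy labels for illustration: class 2 is never predicted
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 1, 1, 1]

# zero_division=0 sets the undefined precision/F1 for class 2 to 0.0
# without emitting the warning
report = classification_report(y_true, y_pred, zero_division=0)
print(report)
```

Applying `zero_division=0` to the `classification_report` calls above would silence the warning without changing any of the reported numbers.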
# Write your code here
######################################################### step (2)
######################################################### Data Augmentation
Img_generator = ImageDataGenerator(rotation_range=90, shear_range=0.5, zoom_range=0.2, horizontal_flip=True)
print(resized_x_train.shape)
print(new_y_train.shape)
for i in range(5):
    iterator = Img_generator.flow(resized_x_train, new_y_train, batch_size=256)
    # Unpack images and labels from the SAME batch; calling next() twice
    # would pair images with labels drawn from a different batch.
    new_imgs_train, new_labels_train = next(iterator)
    new_imgs_train = new_imgs_train.astype("uint8")
    new_labels_train = new_labels_train.astype("uint8")
    print(new_imgs_train.shape)
    print(new_labels_train.shape)
    print("******************* Augmented images *******************")
    for j in range(2):
        plt.imshow(new_imgs_train[j])
        plt.show()
    # Append the augmented batch to the training set
    resized_x_train = np.append(resized_x_train, new_imgs_train, axis=0)
    new_y_train = np.append(new_y_train, new_labels_train, axis=0)
print(resized_x_train.shape)
print(new_y_train.shape)
(630, 128, 128, 3) (630,) (256, 128, 128, 3) (256,) ******************* Augmented images *******************
(886, 128, 128, 3) (886,) (256, 128, 128, 3) (256,) ******************* Augmented images *******************
(1142, 128, 128, 3) (1142,) (256, 128, 128, 3) (256,) ******************* Augmented images *******************
(1398, 128, 128, 3) (1398,) (256, 128, 128, 3) (256,) ******************* Augmented images *******************
(1654, 128, 128, 3) (1654,) (256, 128, 128, 3) (256,) ******************* Augmented images *******************
(1910, 128, 128, 3) (1910,)
resized_x_train.shape
(1910, 128, 128, 3)
new_y_train.shape
(1910,)
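Materializing each augmented batch with `np.append` copies the whole array every iteration and grows memory linearly. An alternative (a sketch, not the submitted approach) is to pass the generator straight to `fit` so every epoch sees freshly augmented batches. The arrays below are hypothetical random stand-ins for `resized_x_train` / `new_y_train`:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Same augmentations as above; rescale folds the /255 normalization into
# the generator so it cannot be applied twice by accident.
aug = ImageDataGenerator(rotation_range=90, shear_range=0.5, zoom_range=0.2,
                         horizontal_flip=True, rescale=1.0 / 255)

# Hypothetical stand-ins for the notebook's training arrays
x = np.random.randint(0, 256, size=(8, 128, 128, 3)).astype("float32")
y = np.array([0, 1, 2, 3, 4, 0, 1, 2])

batch_x, batch_y = next(aug.flow(x, y, batch_size=4, shuffle=False))
print(batch_x.shape, batch_y.shape)  # (4, 128, 128, 3) (4,)

# With a compiled model, the generator can feed training directly:
# model_Vgg.fit(aug.flow(x, y, batch_size=32), epochs=200,
#               validation_data=(resized_x_val, y_val))
```

This keeps only one batch of augmented images in memory at a time and avoids mixing uint8 augmented copies with already-normalized float data.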
# Write your code here
######################################################### step (2)
import tensorflow
from tensorflow import keras

model_Vgg = keras.models.Sequential()
architecture = VGG16(include_top=False, input_shape=(128, 128, 3), weights='imagenet', classes=5, pooling="avg")

##################### Freeze all pretrained VGG layers
for layer in architecture.layers:
    layer.trainable = False

# Reuse only the first two convolutional blocks of VGG16
for i in range(0, 7):
    model_Vgg.add(architecture.layers[i])

# Normalization: scale pixel values to [0, 1].
# Caution: if these arrays were already divided by 255 in an earlier cell,
# this division would be applied twice.
resized_x_train = resized_x_train / 255
resized_x_val = resized_x_val / 255

model_Vgg.add(Conv2D(128, (3, 3), activation="linear", padding='valid'))  # layer 1
model_Vgg.add(Conv2D(16, (1, 1), activation=keras.layers.LeakyReLU(), padding='valid'))  # 1x1 conv layer
model_Vgg.add(MaxPooling2D(pool_size=(2, 2), padding='valid'))  # max pooling layer
model_Vgg.add(Flatten())
model_Vgg.add(Dense(1024, activation=keras.layers.LeakyReLU()))  # layer 2
model_Vgg.add(BatchNormalization())
model_Vgg.add(Dropout(0.5))
model_Vgg.add(Dense(512, activation=keras.layers.LeakyReLU()))  # layer 3
model_Vgg.add(BatchNormalization())
model_Vgg.add(Dropout(0.3))
model_Vgg.add(Dense(256, activation=keras.layers.LeakyReLU()))  # layer 4
model_Vgg.add(BatchNormalization())
model_Vgg.add(Dropout(0.2))
model_Vgg.add(Dense(5, activation='softmax'))  # output layer
model_Vgg.compile(loss='sparse_categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
print(model_Vgg.summary())
history_Vgg = model_Vgg.fit(resized_x_train, new_y_train, epochs=200, batch_size=32, validation_data=(resized_x_val, y_val))
loss_training = history_Vgg.history['loss']
loss_test = history_Vgg.history['val_loss']
accuracy_training = history_Vgg.history['accuracy']
accuracy_test = history_Vgg.history['val_accuracy']
####################### Plotting
plt.plot(loss_test)
plt.plot(loss_training)
plt.xlabel("# Of Epochs")
plt.ylabel("Loss")
plt.legend(['val Loss', 'Train Loss'])
plt.show()
plt.plot(accuracy_test)
plt.plot(accuracy_training)
plt.xlabel("# Of Epochs")
plt.ylabel("Accuracy")
plt.legend(['val accuracy', 'Train accuracy'])
plt.show()
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
block1_conv1 (Conv2D)        (None, 128, 128, 64)      1792
block1_conv2 (Conv2D)        (None, 128, 128, 64)      36928
block1_pool (MaxPooling2D)   (None, 64, 64, 64)        0
block2_conv1 (Conv2D)        (None, 64, 64, 128)       73856
block2_conv2 (Conv2D)        (None, 64, 64, 128)       147584
block2_pool (MaxPooling2D)   (None, 32, 32, 128)       0
conv2d_6 (Conv2D)            (None, 30, 30, 128)       147584
conv2d_7 (Conv2D)            (None, 30, 30, 16)        2064
max_pooling2d_3 (MaxPooling2 (None, 15, 15, 16)        0
flatten_3 (Flatten)          (None, 3600)              0
dense_12 (Dense)             (None, 1024)              3687424
batch_normalization_9 (Batch (None, 1024)              4096
dropout_9 (Dropout)          (None, 1024)              0
dense_13 (Dense)             (None, 512)               524800
batch_normalization_10 (Batc (None, 512)               2048
dropout_10 (Dropout)         (None, 512)               0
dense_14 (Dense)             (None, 256)               131328
batch_normalization_11 (Batc (None, 256)               1024
dropout_11 (Dropout)         (None, 256)               0
dense_15 (Dense)             (None, 5)                 1285
=================================================================
Total params: 4,761,813
Trainable params: 4,498,069
Non-trainable params: 263,744
_________________________________________________________________
None
Epoch 1/200 60/60 [==============================] - 2s 21ms/step - loss: 2.2101 - accuracy: 0.1979 - val_loss: 6.2772 - val_accuracy: 0.1815
... [epochs 2-37 abridged: training accuracy climbs to ~0.98 while val_accuracy hovers between ~0.19 and ~0.29 and val_loss drifts upward into the 5-8 range] ...
Epoch 38/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0685 - accuracy: 0.9775 - val_loss: 6.2173 - val_accuracy: 0.2259 Epoch 
39/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0929 - accuracy: 0.9696 - val_loss: 7.2109 - val_accuracy: 0.2111 Epoch 40/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0688 - accuracy: 0.9743 - val_loss: 5.6220 - val_accuracy: 0.2519 Epoch 41/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0757 - accuracy: 0.9743 - val_loss: 6.3384 - val_accuracy: 0.2185 Epoch 42/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0908 - accuracy: 0.9707 - val_loss: 8.0101 - val_accuracy: 0.2481 Epoch 43/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0795 - accuracy: 0.9749 - val_loss: 6.3070 - val_accuracy: 0.2185 Epoch 44/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0983 - accuracy: 0.9675 - val_loss: 6.3252 - val_accuracy: 0.2556 Epoch 45/200 60/60 [==============================] - 1s 19ms/step - loss: 0.0708 - accuracy: 0.9780 - val_loss: 7.7134 - val_accuracy: 0.2074 Epoch 46/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0661 - accuracy: 0.9759 - val_loss: 6.6387 - val_accuracy: 0.2407 Epoch 47/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0276 - accuracy: 0.9906 - val_loss: 6.8767 - val_accuracy: 0.2037 Epoch 48/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0283 - accuracy: 0.9916 - val_loss: 6.4249 - val_accuracy: 0.2148 Epoch 49/200 60/60 [==============================] - 1s 20ms/step - loss: 0.0303 - accuracy: 0.9880 - val_loss: 6.7696 - val_accuracy: 0.2630 Epoch 50/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0435 - accuracy: 0.9832 - val_loss: 7.1280 - val_accuracy: 0.2407 Epoch 51/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0436 - accuracy: 0.9853 - val_loss: 6.9707 - val_accuracy: 0.2185 Epoch 52/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0441 - accuracy: 0.9838 - val_loss: 6.4699 - val_accuracy: 0.2667 
Epoch 53/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0372 - accuracy: 0.9890 - val_loss: 8.3817 - val_accuracy: 0.2296 Epoch 54/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0597 - accuracy: 0.9806 - val_loss: 7.4528 - val_accuracy: 0.2148 Epoch 55/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0389 - accuracy: 0.9832 - val_loss: 6.6204 - val_accuracy: 0.2556 Epoch 56/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0421 - accuracy: 0.9853 - val_loss: 6.4634 - val_accuracy: 0.2333 Epoch 57/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0354 - accuracy: 0.9874 - val_loss: 6.4179 - val_accuracy: 0.2259 Epoch 58/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0509 - accuracy: 0.9838 - val_loss: 10.6340 - val_accuracy: 0.2111 Epoch 59/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0389 - accuracy: 0.9911 - val_loss: 7.5421 - val_accuracy: 0.2259 Epoch 60/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0327 - accuracy: 0.9874 - val_loss: 6.7639 - val_accuracy: 0.2296 Epoch 61/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0378 - accuracy: 0.9885 - val_loss: 7.8741 - val_accuracy: 0.2519 Epoch 62/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0322 - accuracy: 0.9911 - val_loss: 7.8198 - val_accuracy: 0.2519 Epoch 63/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0478 - accuracy: 0.9838 - val_loss: 7.1535 - val_accuracy: 0.2407 Epoch 64/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0492 - accuracy: 0.9853 - val_loss: 7.2093 - val_accuracy: 0.2259 Epoch 65/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0473 - accuracy: 0.9838 - val_loss: 8.2487 - val_accuracy: 0.2259 Epoch 66/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0501 - accuracy: 0.9801 - val_loss: 7.4859 - val_accuracy: 
0.2111 Epoch 67/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0474 - accuracy: 0.9848 - val_loss: 9.4238 - val_accuracy: 0.2407 Epoch 68/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0393 - accuracy: 0.9859 - val_loss: 8.6494 - val_accuracy: 0.2000 Epoch 69/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0377 - accuracy: 0.9890 - val_loss: 8.7062 - val_accuracy: 0.2259 Epoch 70/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0299 - accuracy: 0.9869 - val_loss: 7.3490 - val_accuracy: 0.2667 Epoch 71/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0465 - accuracy: 0.9843 - val_loss: 7.5515 - val_accuracy: 0.2370 Epoch 72/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0715 - accuracy: 0.9754 - val_loss: 6.8154 - val_accuracy: 0.2815 Epoch 73/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0561 - accuracy: 0.9843 - val_loss: 8.6321 - val_accuracy: 0.2556 Epoch 74/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0659 - accuracy: 0.9759 - val_loss: 8.5789 - val_accuracy: 0.2185 Epoch 75/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0657 - accuracy: 0.9754 - val_loss: 7.1185 - val_accuracy: 0.2296 Epoch 76/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0376 - accuracy: 0.9901 - val_loss: 6.7611 - val_accuracy: 0.2481 Epoch 77/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0450 - accuracy: 0.9817 - val_loss: 8.7154 - val_accuracy: 0.2407 Epoch 78/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0347 - accuracy: 0.9895 - val_loss: 7.6835 - val_accuracy: 0.2407 Epoch 79/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0329 - accuracy: 0.9880 - val_loss: 6.2038 - val_accuracy: 0.2630 Epoch 80/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0380 - accuracy: 0.9874 - val_loss: 6.9452 - 
val_accuracy: 0.2222 Epoch 81/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0316 - accuracy: 0.9880 - val_loss: 7.2302 - val_accuracy: 0.2630 Epoch 82/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0431 - accuracy: 0.9853 - val_loss: 8.4063 - val_accuracy: 0.2556 Epoch 83/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0433 - accuracy: 0.9848 - val_loss: 8.1463 - val_accuracy: 0.2296 Epoch 84/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0586 - accuracy: 0.9853 - val_loss: 7.6837 - val_accuracy: 0.2593 Epoch 85/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0487 - accuracy: 0.9838 - val_loss: 8.2108 - val_accuracy: 0.2259 Epoch 86/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0482 - accuracy: 0.9822 - val_loss: 7.6138 - val_accuracy: 0.2444 Epoch 87/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0544 - accuracy: 0.9806 - val_loss: 8.4138 - val_accuracy: 0.2407 Epoch 88/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0530 - accuracy: 0.9822 - val_loss: 8.6948 - val_accuracy: 0.2481 Epoch 89/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0386 - accuracy: 0.9869 - val_loss: 7.8630 - val_accuracy: 0.2296 Epoch 90/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0573 - accuracy: 0.9848 - val_loss: 7.2753 - val_accuracy: 0.2630 Epoch 91/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0410 - accuracy: 0.9864 - val_loss: 7.7228 - val_accuracy: 0.2556 Epoch 92/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0226 - accuracy: 0.9921 - val_loss: 8.1615 - val_accuracy: 0.2222 Epoch 93/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0255 - accuracy: 0.9921 - val_loss: 7.4532 - val_accuracy: 0.2444 Epoch 94/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0333 - accuracy: 0.9890 - val_loss: 9.7386 
- val_accuracy: 0.2074 Epoch 95/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0264 - accuracy: 0.9911 - val_loss: 8.0423 - val_accuracy: 0.2556 Epoch 96/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0274 - accuracy: 0.9906 - val_loss: 6.7008 - val_accuracy: 0.2778 Epoch 97/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0298 - accuracy: 0.9911 - val_loss: 6.8000 - val_accuracy: 0.3000 Epoch 98/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0267 - accuracy: 0.9901 - val_loss: 7.5623 - val_accuracy: 0.2296 Epoch 99/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0296 - accuracy: 0.9911 - val_loss: 8.2007 - val_accuracy: 0.2259 Epoch 100/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0319 - accuracy: 0.9911 - val_loss: 8.1251 - val_accuracy: 0.2704 Epoch 101/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0313 - accuracy: 0.9895 - val_loss: 8.1425 - val_accuracy: 0.2481 Epoch 102/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0330 - accuracy: 0.9885 - val_loss: 6.3432 - val_accuracy: 0.2778 Epoch 103/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0340 - accuracy: 0.9890 - val_loss: 7.5230 - val_accuracy: 0.2519 Epoch 104/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0349 - accuracy: 0.9895 - val_loss: 8.0436 - val_accuracy: 0.2444 Epoch 105/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0325 - accuracy: 0.9885 - val_loss: 9.0001 - val_accuracy: 0.2111 Epoch 106/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0356 - accuracy: 0.9869 - val_loss: 8.3128 - val_accuracy: 0.2407 Epoch 107/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0351 - accuracy: 0.9890 - val_loss: 8.8736 - val_accuracy: 0.2704 Epoch 108/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0339 - accuracy: 0.9874 - 
val_loss: 9.4140 - val_accuracy: 0.2222 Epoch 109/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0205 - accuracy: 0.9916 - val_loss: 8.3651 - val_accuracy: 0.2556 Epoch 110/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0446 - accuracy: 0.9874 - val_loss: 8.3491 - val_accuracy: 0.2407 Epoch 111/200 60/60 [==============================] - 1s 19ms/step - loss: 0.0246 - accuracy: 0.9895 - val_loss: 6.9337 - val_accuracy: 0.2778 Epoch 112/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0259 - accuracy: 0.9937 - val_loss: 8.4453 - val_accuracy: 0.2370 Epoch 113/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0242 - accuracy: 0.9927 - val_loss: 8.1538 - val_accuracy: 0.2296 Epoch 114/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0226 - accuracy: 0.9906 - val_loss: 8.7772 - val_accuracy: 0.2259 Epoch 115/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0277 - accuracy: 0.9911 - val_loss: 8.4753 - val_accuracy: 0.2333 Epoch 116/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0352 - accuracy: 0.9864 - val_loss: 9.0175 - val_accuracy: 0.2000 Epoch 117/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0384 - accuracy: 0.9885 - val_loss: 7.3821 - val_accuracy: 0.2556 Epoch 118/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0280 - accuracy: 0.9885 - val_loss: 7.3569 - val_accuracy: 0.2704 Epoch 119/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0424 - accuracy: 0.9853 - val_loss: 9.5187 - val_accuracy: 0.2370 Epoch 120/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0378 - accuracy: 0.9869 - val_loss: 7.3329 - val_accuracy: 0.2630 Epoch 121/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0319 - accuracy: 0.9880 - val_loss: 7.5575 - val_accuracy: 0.2444 Epoch 122/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0329 - 
accuracy: 0.9911 - val_loss: 8.7675 - val_accuracy: 0.2407 Epoch 123/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0285 - accuracy: 0.9911 - val_loss: 8.4096 - val_accuracy: 0.2556 Epoch 124/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0462 - accuracy: 0.9864 - val_loss: 7.8482 - val_accuracy: 0.2519 Epoch 125/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0293 - accuracy: 0.9911 - val_loss: 9.5197 - val_accuracy: 0.1963 Epoch 126/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0172 - accuracy: 0.9953 - val_loss: 10.6148 - val_accuracy: 0.2074 Epoch 127/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0073 - accuracy: 0.9990 - val_loss: 8.2532 - val_accuracy: 0.2111 Epoch 128/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0117 - accuracy: 0.9963 - val_loss: 7.6698 - val_accuracy: 0.2630 Epoch 129/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0132 - accuracy: 0.9969 - val_loss: 8.5472 - val_accuracy: 0.2630 Epoch 130/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0143 - accuracy: 0.9953 - val_loss: 8.2025 - val_accuracy: 0.2556 Epoch 131/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0215 - accuracy: 0.9927 - val_loss: 8.1024 - val_accuracy: 0.2630 Epoch 132/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0382 - accuracy: 0.9859 - val_loss: 8.2348 - val_accuracy: 0.1963 Epoch 133/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0281 - accuracy: 0.9906 - val_loss: 8.6391 - val_accuracy: 0.2259 Epoch 134/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0351 - accuracy: 0.9885 - val_loss: 11.7351 - val_accuracy: 0.2037 Epoch 135/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0360 - accuracy: 0.9880 - val_loss: 7.5591 - val_accuracy: 0.2370 Epoch 136/200 60/60 [==============================] - 1s 
17ms/step - loss: 0.0260 - accuracy: 0.9921 - val_loss: 8.9332 - val_accuracy: 0.2074 Epoch 137/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0302 - accuracy: 0.9901 - val_loss: 8.7114 - val_accuracy: 0.2630 Epoch 138/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0385 - accuracy: 0.9880 - val_loss: 8.3096 - val_accuracy: 0.2926 Epoch 139/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0555 - accuracy: 0.9827 - val_loss: 7.9319 - val_accuracy: 0.2444 Epoch 140/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0441 - accuracy: 0.9848 - val_loss: 7.1572 - val_accuracy: 0.2815 Epoch 141/200 60/60 [==============================] - 1s 19ms/step - loss: 0.0499 - accuracy: 0.9832 - val_loss: 9.3357 - val_accuracy: 0.2333 Epoch 142/200 60/60 [==============================] - 1s 20ms/step - loss: 0.0487 - accuracy: 0.9843 - val_loss: 7.6989 - val_accuracy: 0.2370 Epoch 143/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0411 - accuracy: 0.9859 - val_loss: 8.0506 - val_accuracy: 0.2444 Epoch 144/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0288 - accuracy: 0.9885 - val_loss: 8.6897 - val_accuracy: 0.2481 Epoch 145/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0261 - accuracy: 0.9895 - val_loss: 9.8264 - val_accuracy: 0.2185 Epoch 146/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0107 - accuracy: 0.9969 - val_loss: 7.8386 - val_accuracy: 0.2296 Epoch 147/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0060 - accuracy: 0.9990 - val_loss: 8.0632 - val_accuracy: 0.2370 Epoch 148/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0062 - accuracy: 0.9979 - val_loss: 8.2952 - val_accuracy: 0.2333 Epoch 149/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0062 - accuracy: 0.9995 - val_loss: 8.0713 - val_accuracy: 0.2296 Epoch 150/200 60/60 
[==============================] - 1s 17ms/step - loss: 0.0060 - accuracy: 0.9984 - val_loss: 7.3402 - val_accuracy: 0.2444 Epoch 151/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0042 - accuracy: 0.9990 - val_loss: 7.9853 - val_accuracy: 0.2407 Epoch 152/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0103 - accuracy: 0.9963 - val_loss: 8.8304 - val_accuracy: 0.2407 Epoch 153/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0149 - accuracy: 0.9969 - val_loss: 7.7746 - val_accuracy: 0.2593 Epoch 154/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0154 - accuracy: 0.9942 - val_loss: 9.5333 - val_accuracy: 0.2407 Epoch 155/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0153 - accuracy: 0.9948 - val_loss: 8.9581 - val_accuracy: 0.2296 Epoch 156/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0126 - accuracy: 0.9963 - val_loss: 9.7606 - val_accuracy: 0.2259 Epoch 157/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0153 - accuracy: 0.9969 - val_loss: 8.9245 - val_accuracy: 0.2593 Epoch 158/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0251 - accuracy: 0.9906 - val_loss: 8.7648 - val_accuracy: 0.1963 Epoch 159/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0405 - accuracy: 0.9853 - val_loss: 8.4675 - val_accuracy: 0.2630 Epoch 160/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0220 - accuracy: 0.9932 - val_loss: 11.1585 - val_accuracy: 0.2259 Epoch 161/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0063 - accuracy: 0.9990 - val_loss: 7.9327 - val_accuracy: 0.2519 Epoch 162/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0227 - accuracy: 0.9911 - val_loss: 9.4510 - val_accuracy: 0.2667 Epoch 163/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0235 - accuracy: 0.9927 - val_loss: 9.0816 - val_accuracy: 0.2185 
Epoch 164/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0243 - accuracy: 0.9916 - val_loss: 9.8764 - val_accuracy: 0.2444 Epoch 165/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0428 - accuracy: 0.9859 - val_loss: 7.9837 - val_accuracy: 0.2333 Epoch 166/200 60/60 [==============================] - 1s 16ms/step - loss: 0.0226 - accuracy: 0.9901 - val_loss: 8.6388 - val_accuracy: 0.2222 Epoch 167/200 60/60 [==============================] - 1s 16ms/step - loss: 0.0281 - accuracy: 0.9895 - val_loss: 10.7714 - val_accuracy: 0.2185 Epoch 168/200 60/60 [==============================] - 1s 16ms/step - loss: 0.0358 - accuracy: 0.9885 - val_loss: 9.2113 - val_accuracy: 0.2333 Epoch 169/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0239 - accuracy: 0.9921 - val_loss: 9.6287 - val_accuracy: 0.2296 Epoch 170/200 60/60 [==============================] - 1s 16ms/step - loss: 0.0202 - accuracy: 0.9937 - val_loss: 9.8159 - val_accuracy: 0.1926 Epoch 171/200 60/60 [==============================] - 1s 16ms/step - loss: 0.0175 - accuracy: 0.9958 - val_loss: 8.8896 - val_accuracy: 0.2333 Epoch 172/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0177 - accuracy: 0.9921 - val_loss: 9.5405 - val_accuracy: 0.2444 Epoch 173/200 60/60 [==============================] - 1s 22ms/step - loss: 0.0122 - accuracy: 0.9958 - val_loss: 8.2135 - val_accuracy: 0.2778 Epoch 174/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0177 - accuracy: 0.9932 - val_loss: 9.7219 - val_accuracy: 0.2407 Epoch 175/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0115 - accuracy: 0.9932 - val_loss: 9.2081 - val_accuracy: 0.2630 Epoch 176/200 60/60 [==============================] - 1s 16ms/step - loss: 0.0148 - accuracy: 0.9942 - val_loss: 9.4991 - val_accuracy: 0.2556 Epoch 177/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0105 - accuracy: 0.9974 - val_loss: 8.8239 - 
val_accuracy: 0.2370 Epoch 178/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0108 - accuracy: 0.9953 - val_loss: 8.0449 - val_accuracy: 0.2667 Epoch 179/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0150 - accuracy: 0.9953 - val_loss: 8.5395 - val_accuracy: 0.2222 Epoch 180/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0119 - accuracy: 0.9948 - val_loss: 8.4675 - val_accuracy: 0.2630 Epoch 181/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0123 - accuracy: 0.9953 - val_loss: 9.0671 - val_accuracy: 0.2444 Epoch 182/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0220 - accuracy: 0.9937 - val_loss: 9.6083 - val_accuracy: 0.2481 Epoch 183/200 60/60 [==============================] - 1s 16ms/step - loss: 0.0244 - accuracy: 0.9921 - val_loss: 9.3091 - val_accuracy: 0.2481 Epoch 184/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0468 - accuracy: 0.9853 - val_loss: 9.2000 - val_accuracy: 0.2630 Epoch 185/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0272 - accuracy: 0.9885 - val_loss: 9.6748 - val_accuracy: 0.2481 Epoch 186/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0114 - accuracy: 0.9963 - val_loss: 9.3982 - val_accuracy: 0.2333 Epoch 187/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0259 - accuracy: 0.9906 - val_loss: 7.7351 - val_accuracy: 0.2370 Epoch 188/200 60/60 [==============================] - 1s 16ms/step - loss: 0.0423 - accuracy: 0.9869 - val_loss: 8.9870 - val_accuracy: 0.1963 Epoch 189/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0311 - accuracy: 0.9864 - val_loss: 9.9383 - val_accuracy: 0.2333 Epoch 190/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0198 - accuracy: 0.9927 - val_loss: 9.5604 - val_accuracy: 0.2111 Epoch 191/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0367 - accuracy: 0.9880 - 
val_loss: 8.2163 - val_accuracy: 0.2556 Epoch 192/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0344 - accuracy: 0.9911 - val_loss: 8.4629 - val_accuracy: 0.2407 Epoch 193/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0212 - accuracy: 0.9916 - val_loss: 8.7020 - val_accuracy: 0.2556 Epoch 194/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0277 - accuracy: 0.9901 - val_loss: 8.9758 - val_accuracy: 0.2519 Epoch 195/200 60/60 [==============================] - 1s 18ms/step - loss: 0.0192 - accuracy: 0.9932 - val_loss: 9.1250 - val_accuracy: 0.2556 Epoch 196/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0115 - accuracy: 0.9963 - val_loss: 9.1759 - val_accuracy: 0.2481 Epoch 197/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0115 - accuracy: 0.9969 - val_loss: 8.6791 - val_accuracy: 0.2667 Epoch 198/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0111 - accuracy: 0.9958 - val_loss: 10.2252 - val_accuracy: 0.2370 Epoch 199/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0104 - accuracy: 0.9963 - val_loss: 8.5708 - val_accuracy: 0.2111 Epoch 200/200 60/60 [==============================] - 1s 17ms/step - loss: 0.0169 - accuracy: 0.9942 - val_loss: 8.5415 - val_accuracy: 0.2481
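The log above shows validation loss bottoming out at epoch 2 and rising for the remaining 198 epochs, so almost all of the training run is wasted on overfitting. The "patience" rule that Keras's `EarlyStopping` callback applies can be sketched in plain Python (the helper name `early_stop_epoch` is illustrative, not part of any API):

```python
def early_stop_epoch(val_losses, patience=5):
    """Return the 1-based epoch at which training would stop once val_loss
    has failed to improve for `patience` consecutive epochs, or None."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:       # new best validation loss: reset the counter
            best = loss
            wait = 0
        else:                 # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return None  # never triggered

# First eight val_loss values from the log above; the minimum is at epoch 2
log = [6.2772, 1.9203, 2.3519, 2.6540, 2.5905, 3.2199, 4.1403, 4.4815]
print(early_stop_epoch(log, patience=5))  # -> 7
```

In the notebook itself, the equivalent would be passing `tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)` to `model.fit(...)`, which would have stopped this run within the first dozen epochs.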
# Evaluate the VGG-based model on the training (incl. validation) set and on the test set
print("################## Training Including Validation ##################")
y_pred_vgg_train = np.argmax(model_Vgg.predict(resized_x_train_total), axis=1)
cm_train = ConfusionMatrix(y_train, y_pred_vgg_train)
PLOT_ConfusionMatrix(cm_train, "Training including validation set")
print(classification_report(y_train, y_pred_vgg_train))

print("################## Test set ##################")
y_pred_vgg_test = np.argmax(model_Vgg.predict(resized_x_test), axis=1)
cm_test = ConfusionMatrix(y_test, y_pred_vgg_test)
PLOT_ConfusionMatrix(cm_test, "Test set")
print(classification_report(y_test, y_pred_vgg_test))
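`ConfusionMatrix` and `PLOT_ConfusionMatrix` are helper functions defined earlier in the notebook; the matrix they build can be sketched in a few lines of NumPy (the function name `confusion_counts` is illustrative):

```python
import numpy as np

def confusion_counts(y_true, y_pred, n_classes=5):
    """cm[i, j] = number of samples whose true class is i and predicted class is j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy example with 3 classes: correct predictions land on the diagonal
y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 1, 1, 2, 2, 0]
print(confusion_counts(y_true, y_pred, n_classes=3))
```

Row sums give the per-class support (here 180 per class on the training set, 20 on the test set), which is why `classification_report` and the confusion matrix plots below agree on those counts.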
################## Training Including Validation ##################
              precision    recall  f1-score   support

           0       0.73      0.59      0.65       180
           1       0.72      0.65      0.68       180
           2       0.90      0.44      0.59       180
           3       0.47      0.78      0.58       180
           4       0.60      0.67      0.63       180

    accuracy                           0.63       900
   macro avg       0.68      0.63      0.63       900
weighted avg       0.68      0.63      0.63       900

################## Test set ##################
              precision    recall  f1-score   support

           0       0.30      0.15      0.20        20
           1       0.17      0.25      0.20        20
           2       0.40      0.20      0.27        20
           3       0.24      0.45      0.32        20
           4       0.21      0.15      0.18        20

    accuracy                           0.24       100
   macro avg       0.27      0.24      0.23       100
weighted avg       0.27      0.24      0.23       100
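Each row of these reports follows directly from the confusion-matrix counts: precision = TP / (TP + FP), recall = TP / (TP + FN), and f1 is their harmonic mean. A sketch, using hypothetical counts (TP = 9, FP = 28, FN = 11) chosen to be consistent with the rounded class-3 row of the test-set report (20 supports, recall 0.45):

```python
def prf(tp, fp, fn):
    """Per-class precision, recall and f1 from confusion-matrix counts."""
    precision = tp / (tp + fp)              # fraction of class-3 predictions that were right
    recall = tp / (tp + fn)                 # fraction of true class-3 samples recovered
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = prf(tp=9, fp=28, fn=11)
print(round(p, 2), round(r, 2), round(f1, 2))  # -> 0.24 0.45 0.32
```

The harmonic mean penalizes imbalance, which is why class 3's f1 (0.32) sits well below the midpoint of its precision and recall.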
Discussion: The model fits the training data almost perfectly (training accuracy above 0.99 by epoch 50) while validation accuracy never rises much above chance level (0.20 for five balanced classes) and validation loss grows steadily, i.e., the network overfits severely. The test-set accuracy of 0.24 confirms this: despite dropout and batch normalization, the model memorizes the 900 training images rather than learning features that generalize. Stronger data augmentation, early stopping on validation loss, or keeping more of the pretrained VGG blocks frozen would likely narrow the gap between the 0.63 training and 0.24 test accuracy.
[1] N. Teimouri, M. Dyrmann, P. R. Nielsen, S. K. Mathiassen, G. J. Somerville, and R. N. Jørgensen, “Weed growth stage estimator using deep convolutional neural networks,” Sensors, vol. 18, no. 5, 2018.